feat: Add database backup and restore utilities #26607
Conversation
Introduces a new utility class that provides database backup and restore functionality for OpenMetadata's CLI. It discovers all tables via information_schema, filters out GENERATED columns, dumps everything to a .tar.gz archive with metadata, and restores from such archives.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Resolved review threads (3) on openmetadata-service/src/main/java/org/openmetadata/service/util/DatabaseBackupRestore.java (one marked outdated)
Pull request overview
Adds an ops-level database backup/restore capability to OpenMetadata, wiring new backup / restore CLI commands into OpenMetadataOperations and implementing archive-based export/import via JDBI for MySQL/PostgreSQL.
Changes:
- Introduces `DatabaseBackupRestore` utility to write/read a `.tar.gz` backup with per-table JSON plus `metadata.json`.
- Adds `backup` and `restore` Picocli commands to `OpenMetadataOperations`.
- Adds unit tests for `extractDatabaseName` JDBC URL parsing.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| openmetadata-service/src/main/java/org/openmetadata/service/util/DatabaseBackupRestore.java | Implements table/column discovery, backup archive creation, and restore logic. |
| openmetadata-service/src/main/java/org/openmetadata/service/util/OpenMetadataOperations.java | Wires backup / restore commands into the ops CLI. |
| openmetadata-service/src/test/java/org/openmetadata/service/util/DatabaseBackupRestoreTest.java | Adds unit tests for JDBC database-name extraction helper. |
| openmetadata-service/pom.xml | Adds commons-compress dependency for tar/gzip support. |
```java
});

try {
  jdbi.useHandle(this::disableForeignKeyChecks);
  restoreTablesFromArchive(backupPath, tablesMetadata);
  LOG.info("Restore completed successfully");
} finally {
  jdbi.useHandle(this::enableForeignKeyChecks);
}
```
```java
byte[] content = tais.readAllBytes();
ArrayNode rows = (ArrayNode) MAPPER.readTree(content);
```

```java
int offset = 0;
ArrayNode allRows = MAPPER.createArrayNode();

while (true) {
  String sql =
      String.format(
          "SELECT %s FROM %s LIMIT %d OFFSET %d",
          quotedColumns, quotedTable, BATCH_SIZE, offset);
```
```java
JsonNode val = row.get(col);
if (val == null || val.isNull()) {
  batch.bindNull(col, Types.VARCHAR);
} else if (val.isNumber()) {
  if (val.isLong() || val.isInt() || val.isBigInteger()) {
    batch.bind(col, val.longValue());
  } else {
    batch.bind(col, val.doubleValue());
  }
```
```java
"SELECT %s FROM %s LIMIT %d OFFSET %d",
quotedColumns, quotedTable, BATCH_SIZE, offset);
```
```java
public void restore(String backupPath, boolean force) throws IOException {
  LOG.info("Starting database restore from {}", backupPath);

  ObjectNode metadata = readMetadata(backupPath);
  String backupDbType = metadata.get("databaseType").asText();
  if (!backupDbType.equals(connectionType.name())) {
    throw new IllegalStateException(
        String.format(
            "Backup database type '%s' does not match current connection type '%s'",
            backupDbType, connectionType.name()));
  }

  LOG.info(
      "Backup info - version: {}, timestamp: {}, databaseType: {}",
      metadata.get("version").asText(),
      metadata.get("timestamp").asText(),
      backupDbType);

  ObjectNode tablesMetadata = (ObjectNode) metadata.get("tables");

  jdbi.useHandle(
      handle -> {
        if (force) {
          truncateAllTables(handle, tablesMetadata);
        } else {
          validateTablesEmpty(handle, tablesMetadata);
        }
      });

  try {
    jdbi.useHandle(this::disableForeignKeyChecks);
    restoreTablesFromArchive(backupPath, tablesMetadata);
    LOG.info("Restore completed successfully");
  } finally {
    jdbi.useHandle(this::enableForeignKeyChecks);
  }
}
```
```java
"SELECT column_name FROM information_schema.columns "
    + "WHERE table_schema = 'public' AND table_name = :table "
    + "AND (is_generated = 'NEVER' OR is_generated IS NULL) "
    + "AND (column_default NOT LIKE 'nextval%' OR column_default IS NULL) "
```
```java
    batch.bind(col, val.doubleValue());
  }
} else if (val.isBoolean()) {
  batch.bind(col, val.booleanValue());
```
```java
private void disableForeignKeyChecks(Handle handle) {
  if (connectionType == ConnectionType.MYSQL) {
    handle.execute("SET FOREIGN_KEY_CHECKS = 0");
  } else {
    handle.execute("SET session_replication_role = 'replica'");
  }
}
```
…mary output Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds a test-migration command to OpenMetadataOperations that:
- Restores a backup, then runs pending migrations one by one
- For each migration with a test class, runs validateBefore/validateAfter
- Prints a summary table to stdout for PR validation
- Returns exit code 0 (all pass) or 1 (any fail)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Resolved review threads (2) on openmetadata-service/src/main/java/org/openmetadata/service/util/MigrationTestRunner.java (one marked outdated)
🟡 Playwright Results — all passed: ✅ 3387 passed · ❌ 0 failed · 🟡 17 flaky (passed on retry) · ⏭️ 183 skipped

How to debug locally:

```shell
# Download the playwright-test-results-<shard> artifact and unzip
npx playwright show-trace path/to/trace.zip  # view the trace
```
- Use single Handle for entire restore (FK disable/enable + inserts on same session)
- Add binary data handling (val.isBinary() check) in restore to decode Base64
- Add ORDER BY via primary key discovery to prevent non-deterministic pagination
- Escape embedded quotes in quoteIdentifier to prevent SQL injection
- Validate archive table names against metadata to reject unknown tables
- Stream backup to temp file per table to avoid OOM on large tables
- Stream restore via JsonParser to avoid loading entire table into memory
- Use REPEATABLE READ transaction isolation for consistent backup snapshots
- Switch to positional parameters (?) to handle columns with special chars
- Replace hardcoded 'public' schema with current_schema() for PostgreSQL
- Use bind(idx, (Object) null) instead of bindNull with Types.VARCHAR
- Fix unused pattern variable (use 'number' binding in instanceof chain)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The val.isBinary() check in insertRowsStreaming was dead code because Jackson's writeBinaryField encodes byte[] as base64 strings in JSON, and readTree deserializes them as TextNode (not BinaryNode). This caused binary data to be inserted as raw base64 strings instead of decoded bytes. Store binary column names (discovered via information_schema.columns) in the backup metadata so restore can decode base64 back to byte[] for the correct columns. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
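The round trip described above can be reproduced with only the JDK's `Base64` codec. This is a standalone sketch: `binaryColumns` is the metadata key named in this commit, while the class and method names here are purely illustrative.

```java
import java.util.Arrays;
import java.util.Base64;

public class BinaryColumnDecode {
    // Backup side: Jackson's writeBinaryField emits Base64 text into the JSON,
    // which readTree later parses as a TextNode rather than a BinaryNode.
    static String encode(byte[] raw) {
        return Base64.getEncoder().encodeToString(raw);
    }

    // Restore side: only columns listed in the backup's binaryColumns metadata
    // are decoded back to byte[]; other text values are bound as-is.
    static byte[] decode(String base64Text) {
        return Base64.getDecoder().decode(base64Text);
    }

    public static void main(String[] args) {
        byte[] original = {0x00, 0x1F, (byte) 0xFF};
        String inJson = encode(original); // the TextNode content seen at restore time
        System.out.println(inJson + " " + Arrays.equals(decode(inJson), original));
    }
}
```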
…tion, extract commons-compress version

- MigrationTestRunner: replace all System.out.println/printf with LOG.info in printSummary and centerText for proper logging
- MigrationTestRunner: add LOG.warn for partial migration state on failure
- OpenMetadataOperations: add --force flag to test-migration command to prevent accidental table truncation (mirrors restore command pattern)
- pom.xml: extract hardcoded commons-compress 1.27.1 to parent POM property

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…adata, and versionToPackage edge cases Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…flow

- Handle BigDecimal in backup to prevent truncation of DECIMAL/numeric values
- Add MAX_METADATA_SIZE limit and use readNBytes for defensive metadata parsing
- Move discoverBinaryColumns before data write for logical grouping
- Eliminate double FK disable/enable by restructuring force-restore flow
- Add DatasourceConfig.initialize to backup/restore CLI commands

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add --batch-size option to backup, restore, and test-migration commands. Defaults to 1000 rows. Applies to both SQL pagination during backup and insert batching during restore. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…d AFTER tests on failure

- Add SAFE_IDENTIFIER regex to quoteIdentifier() for SQL injection defense-in-depth
- Use underscore separators in versionToPackage to prevent version collisions (e.g. 1.12.0 vs 1.1.20)
- Skip validateAfter when migration has failed to avoid confusing results

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
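The collision the second bullet fixes is easy to see with a toy version of the mapping. This `versionToPackage` sketch is hypothetical, not the actual implementation:

```java
public class VersionToPackageSketch {
    // Naive mapping: stripping dots collides ("1.12.0" and "1.1.20" both -> "v1120").
    static String naive(String version) {
        return "v" + version.replace(".", "");
    }

    // With underscore separators each version maps to a distinct package name.
    static String withUnderscores(String version) {
        return "v" + version.replace('.', '_');
    }

    public static void main(String[] args) {
        System.out.println(naive("1.12.0").equals(naive("1.1.20")));                     // collision
        System.out.println(withUnderscores("1.12.0").equals(withUnderscores("1.1.20"))); // distinct
    }
}
```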
Code Review ✅ Approved — 5 resolved / 5 findings

Adds comprehensive database backup and restore utilities with CLI commands for migration validation and testing. Resolves critical issues including FK check isolation, SQL injection via backup metadata, snapshot isolation during reads, version collision handling, and test execution safety.

5 resolved:
- ✅ Bug: FK checks disabled on separate handle, not effective for inserts
- ✅ Security: SQL injection via crafted backup archive table/column names
- ✅ Edge Case: Backup reads tables without snapshot isolation
- ✅ Bug: versionToPackage produces collisions for different versions
- ✅ Edge Case: AFTER tests still run when migration fails
Pull request overview
This PR adds operational utilities to OpenMetadata’s Java service to support full database backup/restore into a .tar.gz archive and introduces a migration testing runner that restores a backup and executes pending migrations with optional per-version before/after assertions.
Changes:
- Add `DatabaseBackupRestore` utility (JDBI-based) and wire new `backup`, `restore`, and `test-migration` CLI commands into `OpenMetadataOperations`.
- Add `MigrationTestRunner` plus a small migration test API (`MigrationTestCase`, `TestResult`) and expose the migrations list via a getter on `MigrationWorkflow`.
- Add unit tests covering JDBC DB-name extraction, identifier quoting, metadata reading, and migration version → package mapping.
Reviewed changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| pom.xml | Adds commons-compress version property used for archive support. |
| openmetadata-service/pom.xml | Adds Apache Commons Compress dependency for tar/gzip backup archives. |
| openmetadata-service/src/main/java/org/openmetadata/service/util/OpenMetadataOperations.java | Adds backup, restore, test-migration CLI commands. |
| openmetadata-service/src/main/java/org/openmetadata/service/util/DatabaseBackupRestore.java | Implements dynamic table/column discovery, tar.gz backup, and restore logic for MySQL/Postgres. |
| openmetadata-service/src/main/java/org/openmetadata/service/util/MigrationTestRunner.java | Implements restore+stepwise migration execution with before/after validation and summary output. |
| openmetadata-service/src/main/java/org/openmetadata/service/migration/api/MigrationWorkflow.java | Exposes migrations via Lombok @Getter to support the runner. |
| openmetadata-service/src/main/java/org/openmetadata/service/migration/api/MigrationTestCase.java | Defines the before/after validation interface for migration tests. |
| openmetadata-service/src/main/java/org/openmetadata/service/migration/api/TestResult.java | Adds a small result record with pass/fail helpers for assertions. |
| openmetadata-service/src/test/java/org/openmetadata/service/util/DatabaseBackupRestoreTest.java | Unit tests for DB-name extraction, identifier quoting, and metadata.json reading. |
| openmetadata-service/src/test/java/org/openmetadata/service/util/MigrationTestRunnerTest.java | Unit tests for versionToPackage parsing behavior. |
```java
"SELECT column_name FROM information_schema.columns "
    + "WHERE table_schema = current_schema() AND table_name = :table "
    + "AND (is_generated = 'NEVER' OR is_generated IS NULL) "
    + "AND (column_default NOT LIKE 'nextval%' OR column_default IS NULL) "
```
In Postgres, discoverColumns filters out columns whose column_default starts with nextval (sequence-backed/serial columns). That means those columns (e.g., openmetadata_settings.id) are omitted from the backup metadata and will not be restored, which breaks “full backup/restore” semantics and can also break FK relationships if any table references those IDs. Consider including these columns in backups and, after restore, resetting sequences to max(col) (or setval) so future inserts keep working.
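The sequence re-sync this comment suggests could look roughly like the following PostgreSQL sketch; `openmetadata_settings.id` is the example column named above, and this assumes the column is backed by a serial sequence:

```sql
-- Hypothetical post-restore fixup: point each serial-backed sequence past
-- the restored data so future inserts do not collide with restored IDs.
SELECT setval(
    pg_get_serial_sequence('openmetadata_settings', 'id'),
    COALESCE((SELECT MAX(id) FROM openmetadata_settings), 1));
```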
```java
private void disableForeignKeyChecks(Handle handle) {
  if (connectionType == ConnectionType.MYSQL) {
    handle.execute("SET FOREIGN_KEY_CHECKS = 0");
  } else {
    handle.execute("SET session_replication_role = 'replica'");
  }
```
For Postgres, SET session_replication_role = 'replica' generally requires superuser privileges. If OpenMetadata runs with a non-superuser DB user (common in managed Postgres), restore will fail before any data is loaded. Consider an alternative approach that works without superuser (e.g., restoring in FK-safe order, using deferrable constraints/SET CONSTRAINTS ALL DEFERRED where applicable, or documenting/enforcing the required privileges explicitly).
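One non-superuser shape of that idea is deferring FK checks to commit time instead of disabling them. This sketch assumes the relevant FK constraints are declared `DEFERRABLE`, which it does not verify:

```sql
BEGIN;
-- Defers checking of DEFERRABLE constraints until COMMIT; works without
-- superuser privileges, unlike SET session_replication_role = 'replica'.
SET CONSTRAINTS ALL DEFERRED;
-- ... bulk INSERTs in any table order ...
COMMIT;
```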
```java
} finally {
  commitTransaction(handle);
```
The backup path wraps table reads in a transaction but always calls COMMIT in a finally block. If any SQL error occurs (especially on Postgres where the transaction becomes aborted), COMMIT will fail and may mask the original error; it also prevents a clean rollback. Consider committing only on success and issuing a ROLLBACK when an exception occurs (or use JDBI transaction helpers like useTransaction).
Suggested change:
```diff
-} finally {
-  commitTransaction(handle);
+  commitTransaction(handle);
+} catch (Exception e) {
+  handle.rollback();
+  throw e;
```
```java
description =
    "Number of rows per batch during restore. Default: "
        + DatabaseBackupRestore.DEFAULT_BATCH_SIZE)
```
The --batch-size option hardcodes defaultValue = "1000" but the description references DatabaseBackupRestore.DEFAULT_BATCH_SIZE. If the constant ever changes, CLI behavior and help text will diverge. Consider defining a single source of truth (e.g., keep the help text literal, or introduce a shared constant for both).
Suggested change:
```diff
-description =
-    "Number of rows per batch during restore. Default: "
-        + DatabaseBackupRestore.DEFAULT_BATCH_SIZE)
+description = "Number of rows per batch during restore. Default: 1000")
```
```java
TarArchiveEntry entry;
while ((entry = tais.getNextEntry()) != null) {
  String name = entry.getName();
  if (!name.startsWith("tables/") || !name.endsWith(".json")) {
    continue;
  }

  String tableName = name.substring("tables/".length(), name.length() - ".json".length());

  if (!validTables.contains(tableName)) {
    LOG.warn("Table {} from archive not in metadata, skipping", tableName);
    continue;
  }

  JsonNode tableMetaNode = tablesMetadata.get(tableName);
  if (tableMetaNode == null) {
    LOG.warn("No metadata found for table {}, skipping", tableName);
    continue;
  }

  List<String> columns = new ArrayList<>();
  tableMetaNode.get("columns").forEach(col -> columns.add(col.asText()));

  Set<String> binaryColumns = new HashSet<>();
  JsonNode binaryColumnsNode = tableMetaNode.get("binaryColumns");
  if (binaryColumnsNode != null) {
    binaryColumnsNode.forEach(col -> binaryColumns.add(col.asText()));
  }

  LOG.info("Restoring table {}", tableName);
  int rowCount = insertRowsStreaming(handle, tableName, columns, binaryColumns, tais);
  LOG.info("Restored table {} ({} rows)", tableName, rowCount);
}
```
restoreTablesFromArchive restores only tables that have a corresponding tables/<name>.json entry in the archive, but it does not validate that every table listed in metadata.json was actually present/restored. A truncated/corrupted backup could therefore “restore successfully” with missing tables/data. Consider tracking restored table names and failing if any tables from tablesMetadata are missing in the archive.
```java
.fieldNames()
.forEachRemaining(
    tableName -> {
      String sql = String.format("TRUNCATE TABLE %s", quoteIdentifier(tableName));
      handle.execute(sql);
      LOG.info("Truncated table {}", tableName);
    });
```
truncateAllTables runs TRUNCATE TABLE <table> per table. On Postgres this can fail when there are FK dependencies between tables (TRUNCATE enforces FK reference rules unless you use CASCADE or truncate all related tables in one statement). Consider using TRUNCATE ... CASCADE for Postgres (or truncating all tables in a single TRUNCATE statement) to make --force reliable.
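For PostgreSQL, the single-statement form this comment suggests would look roughly like the following; the table names are placeholders, standing in for the full list from the backup metadata:

```sql
-- Hypothetical: truncating all backed-up tables in one statement lets
-- PostgreSQL resolve mutual FK references among them, so no CASCADE
-- (which could reach tables outside the backup) is needed.
TRUNCATE TABLE table_a, table_b, table_c;
```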
```java
int offset = 0;
while (true) {
  String sql =
      String.format(
          "SELECT %s FROM %s%s LIMIT %d OFFSET %d",
          quotedColumns, quotedTable, orderByClause, batchSize, offset);
  List<Map<String, Object>> rows = handle.createQuery(sql).mapToMap().list();

  for (Map<String, Object> row : rows) {
    gen.writeStartObject();
    for (String col : columns) {
      Object val = row.get(col);
      if (val == null) {
        gen.writeNullField(col);
      } else if (val instanceof Number number) {
        if (number instanceof Long l) {
          gen.writeNumberField(col, l);
        } else if (number instanceof Integer i) {
          gen.writeNumberField(col, i);
        } else if (number instanceof Double d) {
          gen.writeNumberField(col, d);
        } else if (number instanceof Float f) {
          gen.writeNumberField(col, f);
        } else if (number instanceof BigDecimal bd) {
          gen.writeNumberField(col, bd);
        } else {
          gen.writeNumberField(col, number.longValue());
        }
      } else if (val instanceof Boolean b) {
        gen.writeBooleanField(col, b);
      } else if (val instanceof byte[] bytes) {
        gen.writeBinaryField(col, bytes);
      } else {
        gen.writeStringField(col, val.toString());
      }
    }
    gen.writeEndObject();
    rowCount++;
  }

  if (rows.size() < batchSize) {
    break;
  }
  offset += batchSize;
```
writeTableToTempFile paginates with LIMIT ... OFFSET ... while increasing offset by batchSize. For large tables this becomes progressively slower (and can be very slow on Postgres/MySQL) because the database must scan/skip offset rows each time. Consider keyset pagination using the primary key(s) (or a stable cursor) to page efficiently, especially since you already discover PK columns.
Suggested change (replace the LIMIT/OFFSET loop with a single streamed query):
```java
String sql = String.format("SELECT %s FROM %s%s", quotedColumns, quotedTable, orderByClause);
Iterable<Map<String, Object>> rows =
    handle.createQuery(sql).setFetchSize(batchSize).mapToMap();
for (Map<String, Object> row : rows) {
  gen.writeStartObject();
  for (String col : columns) {
    Object val = row.get(col);
    if (val == null) {
      gen.writeNullField(col);
    } else if (val instanceof Number number) {
      if (number instanceof Long l) {
        gen.writeNumberField(col, l);
      } else if (number instanceof Integer i) {
        gen.writeNumberField(col, i);
      } else if (number instanceof Double d) {
        gen.writeNumberField(col, d);
      } else if (number instanceof Float f) {
        gen.writeNumberField(col, f);
      } else if (number instanceof BigDecimal bd) {
        gen.writeNumberField(col, bd);
      } else {
        gen.writeNumberField(col, number.longValue());
      }
    } else if (val instanceof Boolean b) {
      gen.writeBooleanField(col, b);
    } else if (val instanceof byte[] bytes) {
      gen.writeBinaryField(col, bytes);
    } else {
      gen.writeStringField(col, val.toString());
    }
  }
  gen.writeEndObject();
  rowCount++;
```
```java
description =
    "Number of rows to read/write per batch. Default: "
        + DatabaseBackupRestore.DEFAULT_BATCH_SIZE)
```
The --batch-size option hardcodes defaultValue = "1000" but the description references DatabaseBackupRestore.DEFAULT_BATCH_SIZE. If the constant ever changes, CLI behavior and help text will diverge. Consider defining a single source of truth (e.g., keep the help text literal, or introduce a shared constant for both).
Suggested change:
```diff
-description =
-    "Number of rows to read/write per batch. Default: "
-        + DatabaseBackupRestore.DEFAULT_BATCH_SIZE)
+description = "Number of rows to read/write per batch. Default: 1000")
```
```java
names = {"--batch-size"},
defaultValue = "1000",
description =
    "Number of rows to insert per batch. Default: "
        + DatabaseBackupRestore.DEFAULT_BATCH_SIZE)
int batchSize) {
```
The --batch-size option hardcodes defaultValue = "1000" but the description references DatabaseBackupRestore.DEFAULT_BATCH_SIZE. If the constant ever changes, CLI behavior and help text will diverge. Consider defining a single source of truth (e.g., keep the help text literal, or introduce a shared constant for both).
```java
  return null;
} catch (Exception e) {
  LOG.warn("Failed to instantiate test class {}", className, e);
  return null;
```
If a migration test class exists but fails to instantiate (constructor throws, linkage error, etc.), loadTestCase logs a warning and returns null, which later gets reported as “(no tests)” instead of a failing test. Consider surfacing this as a failed entry in the summary so broken test implementations are not silently ignored.
Suggested change:
```diff
-  return null;
-} catch (Exception e) {
-  LOG.warn("Failed to instantiate test class {}", className, e);
-  return null;
+  // No migration test defined for this version; treat as "no tests"
+  return null;
+} catch (Exception e) {
+  String message = String.format("Failed to instantiate migration test class %s", className);
+  LOG.warn(message, e);
+  throw new RuntimeException(message, e);
```
|



Summary
- `DatabaseBackupRestore` utility class for full database backup and restore via JDBI
- `backup` and `restore` CLI commands added to `OpenMetadataOperations`
- `MigrationTestRunner` with `MigrationTestCase` interface for migration validation
- `test-migration` CLI command that restores a backup, runs pending migrations one by one, and executes before/after test assertions
- `.tar.gz` archive with `metadata.json` (timestamp, version, db type, per-table info) and one JSON file per table
- `--force` flag to truncate existing data, and handles FK constraints

Usage
Migration Test Framework
Define test classes per migration version following the naming convention:
The test-migration command outputs a summary table:
Test plan
- Unit tests for `extractDatabaseName` (4 tests)
- Unit tests for `versionToPackage` (5 tests)
- Restore without `--force` on a non-empty database fails with table listing
- Restore with `--force` truncates and restores successfully